Intrinsically-generated fluctuating activity in excitatory-inhibitory networks
Recurrent networks of non-linear units display a variety of dynamical regimes
depending on the structure of their synaptic connectivity. A particularly
remarkable phenomenon is the appearance of strongly fluctuating, chaotic
activity in networks of deterministic, but randomly connected rate units. How
this type of intrinsically generated fluctuations appears in more realistic
networks of spiking neurons has been a long-standing question. To ease the
comparison between rate and spiking networks, recent works investigated the
dynamical regimes of randomly connected rate networks with segregated
excitatory and inhibitory populations, and firing rates constrained to be
positive. These works derived general dynamical mean field (DMF) equations
describing the fluctuating dynamics, but solved these equations only in the
case of purely inhibitory networks. Using a simplified excitatory-inhibitory
architecture in which the DMF equations are more tractable, here we show
that the presence of excitation qualitatively modifies the fluctuating activity
compared to purely inhibitory networks. In the presence of excitation,
intrinsically generated fluctuations induce a strong increase in mean firing
rates, a phenomenon that is much weaker in purely inhibitory networks.
Excitation moreover induces two different fluctuating regimes: for moderate
overall coupling, recurrent inhibition is sufficient to stabilize fluctuations;
for strong coupling, firing rates are stabilized solely by the upper bound
imposed on activity, even if inhibition is stronger than excitation. These
results extend to more general network architectures, and to rate networks
receiving noisy inputs mimicking spiking activity. Finally, we show that
signatures of the second dynamical regime appear in networks of
integrate-and-fire neurons.
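As a rough illustration of the class of models studied here (not the paper's exact equations or parameters), the following sketch simulates a randomly connected excitatory-inhibitory rate network with firing rates constrained to be positive and bounded. The weights, population sizes, connection probability, and the bound r_max are illustrative assumptions; the time-averaged mean rate is the quantity the abstract says is boosted by intrinsically generated fluctuations.

```python
import numpy as np

rng = np.random.default_rng(0)

N_E, N_I = 800, 200          # excitatory / inhibitory population sizes (illustrative)
N = N_E + N_I
J_E, J_I = 2.0, -8.0         # chosen so inhibition balances excitation on average
p = 0.1                      # connection probability

C = rng.random((N, N)) < p                   # random connectivity mask
J = np.zeros((N, N))
J[:, :N_E] = C[:, :N_E] * J_E / np.sqrt(N)   # columns 0..N_E-1 are excitatory
J[:, N_E:] = C[:, N_E:] * J_I / np.sqrt(N)   # remaining columns are inhibitory

def phi(x, r_max=10.0):
    """Positive, saturating transfer function: rates bounded in [0, r_max]."""
    return r_max * 0.5 * (1.0 + np.tanh(x))

dt, T = 0.1, 200.0
x = rng.standard_normal(N)
mean_rates = []
for _ in range(int(T / dt)):
    x += dt * (-x + J @ phi(x))              # deterministic rate dynamics
    mean_rates.append(phi(x).mean())

# time-averaged mean rate in the (possibly fluctuating) steady state
print("mean rate: %.2f" % np.mean(mean_rates[len(mean_rates) // 2:]))
```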
Response of a Hexagonal Granular Packing under a Localized External Force
We study the response of a two-dimensional hexagonal packing of rigid,
frictionless spherical grains to a vertically downward point force applied to a
single grain in the top layer. We use a statistical approach in which each
configuration of the contact forces is equally likely. We show that this
problem is equivalent to a correlated q-model. We find that the response
displays two peaks that lie precisely along the downward lattice directions
emanating from the point of application of the force. With increasing depth,
the magnitude of the peaks decreases, and a central peak develops. On the
bottom of the pile, only the middle peak persists. The response of different
system sizes exhibits self-similarity.
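For comparison with this result, here is a minimal simulation of the classic uncorrelated q-model (Coppersmith et al.), in which each grain splits its load between its two downward neighbors with an independent random fraction. This is not the correlated q-model the paper derives: the uncorrelated version shown here produces a single, diffusively broadening central peak, whereas the correlations are what yield the two peaks along the lattice directions. Lattice sizes and sample counts are arbitrary; boundaries are open.

```python
import numpy as np

rng = np.random.default_rng(1)

depth, width = 30, 61        # layers and grains per layer (illustrative)
n_samples = 2000             # disorder realizations to average over

response = np.zeros((depth, width))
for _ in range(n_samples):
    w = np.zeros(width)
    w[width // 2] = 1.0      # unit point force on the central top grain
    for d in range(depth):
        response[d] += w
        q = rng.random(width)                # fraction passed to the left neighbor
        w_new = np.zeros(width)
        w_new[:-1] += q[1:] * w[1:]          # load passed down-left
        w_new[1:] += (1 - q[:-1]) * w[:-1]   # load passed down-right
        w = w_new                # load leaving the open edges is lost

response /= n_samples
print("mean load around the center at depth 10:",
      np.round(response[10, 27:34], 3))
```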
A geometrical analysis of global stability in trained feedback networks
Recurrent neural networks have been extensively studied in the context of
neuroscience and machine learning due to their ability to implement complex
computations. While substantial progress in designing effective learning
algorithms has been achieved in recent years, a full understanding of trained
recurrent networks is still lacking. Specifically, the mechanisms that allow
computations to emerge from the underlying recurrent dynamics are largely
unknown. Here we focus on a simple, yet underexplored computational setup: a
feedback architecture trained to associate a stationary output with a stationary
input. As a starting point, we derive an approximate analytical description of
global dynamics in trained networks which assumes uncorrelated connectivity
weights in the feedback and in the random bulk. The resulting mean-field theory
suggests that the task admits several classes of solutions, which imply
different stability properties. Different classes are characterized in terms of
the geometrical arrangement of the readout with respect to the input vectors,
defined in the high-dimensional space spanned by the network population. We
find that this approximate theoretical approach can be used to understand how
standard training techniques implement the input-output task in finite-size
feedback networks. In particular, our simplified description captures the local
and the global stability properties of the target solution, and thus predicts
training performance.
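The following sketch sets up the computational task described here: a random bulk plus a rank-one feedback loop, with a scalar readout fed back into the network, driven by a stationary input. As a stand-in for training, the feedback and readout vectors are simply drawn at random (an assumption made purely to keep the sketch short); the probe at the end illustrates how global stability can be tested numerically, by checking that very different initial conditions converge to the same stationary readout.

```python
import numpy as np

rng = np.random.default_rng(2)
N, g = 500, 0.8                   # network size, bulk coupling (subcritical)
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # random bulk
w_in = rng.standard_normal(N)     # input vector
n = rng.standard_normal(N)        # readout vector (untrained stand-in)
m = rng.standard_normal(N)        # feedback vector (untrained stand-in)

u, dt, T = 1.0, 0.05, 100.0       # stationary input, step size, duration

def stationary_readout(x0):
    x = x0.copy()
    for _ in range(int(T / dt)):
        r = np.tanh(x)
        z = n @ r / N             # scalar readout, fed back through m
        x += dt * (-x + J @ r + m * z + w_in * u)
    return n @ np.tanh(x) / N

# Global-stability probe: same stationary readout from diverse initial states?
for scale in (0.1, 1.0, 10.0):
    print("init scale %5.1f -> readout %.4f"
          % (scale, stationary_readout(scale * rng.standard_normal(N))))
```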
Correlations between synapses in pairs of neurons slow down dynamics in randomly connected neural networks
Networks of randomly connected neurons are among the most popular models in
theoretical neuroscience. The connectivity between neurons in the cortex is,
however, not fully random; the simplest and most prominent deviation from
randomness found in experimental data being the overrepresentation of
bidirectional connections among pyramidal cells. Using numerical and analytical
methods, we investigated the effects of partially symmetric connectivity on
dynamics in networks of rate units. We considered the two dynamical regimes
exhibited by random neural networks: the weak-coupling regime, where the firing
activity decays to a single fixed point unless the network is stimulated, and
the strong-coupling or chaotic regime, characterized by internally generated
fluctuating firing rates. In the weak-coupling regime, we analytically computed
the autocorrelation of network activity in the presence of external noise, for
an arbitrary degree of symmetry. In the chaotic regime, we performed simulations to
determine the timescale of the intrinsic fluctuations. In both cases, symmetry
increases the characteristic asymptotic decay time of the autocorrelation
function and therefore slows down the dynamics in the network.
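A minimal sketch of the weak-coupling setting described here, under standard assumptions: a Gaussian connectivity matrix with tunable symmetry (the correlation between J_ij and J_ji is 2*eta/(1+eta**2)), linear rate dynamics driven by white noise, and the population-averaged autocorrelation computed from the simulated trajectory. Parameter values are illustrative; the slower decay for eta > 0 is the slowing-down effect the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(3)
N, g, dt, T = 300, 0.4, 0.02, 400.0   # weak coupling: fixed point stable

def autocorrelation(eta, max_lag=200):
    X = rng.standard_normal((N, N))
    # partially symmetric Gaussian matrix: corr(J_ij, J_ji) = 2*eta/(1+eta**2)
    J = g / np.sqrt(N) * (X + eta * X.T) / np.sqrt(1 + eta**2)
    x = np.zeros(N)
    traj = []
    for _ in range(int(T / dt)):
        # linear rate dynamics driven by white noise
        x += dt * (-x + J @ x) + np.sqrt(2 * dt) * rng.standard_normal(N)
        traj.append(x.copy())
    traj = np.array(traj)[2000:]      # discard the initial transient
    traj -= traj.mean(0)
    ac = [np.mean(traj[: -lag or None] * traj[lag:]) for lag in range(max_lag)]
    return np.array(ac) / ac[0]       # normalized, averaged over units

for eta in (0.0, 0.8):
    print("eta=%.1f  autocorrelation at lag %.1f: %.3f"
          % (eta, 100 * dt, autocorrelation(eta)[100]))
```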
Instability to a heterogeneous oscillatory state in randomly connected recurrent networks with delayed interactions
Oscillatory dynamics are ubiquitous in biological networks. Possible sources
of oscillations are well understood in low-dimensional systems, but have not
been fully explored in high-dimensional networks. Here we study large networks
consisting of randomly coupled rate units. We identify a novel type of
bifurcation in which a continuous part of the eigenvalue spectrum of the linear
stability matrix crosses the instability line at non-zero frequency. This
bifurcation occurs when the interactions are delayed and partially
anti-symmetric, and leads to a heterogeneous oscillatory state in which
oscillations are apparent in the activity of individual units, but not at the
population-average level.
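A sketch of this setting under illustrative assumptions: rate units with delayed interactions and a partially anti-symmetric Gaussian coupling matrix (eta < 0 below makes J_ij and J_ji anti-correlated). Whether this particular parameter set lands in the heterogeneous oscillatory regime is not guaranteed; the diagnostic at the end contrasts single-unit fluctuations with the near-flat population average, which is the signature described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)
N, g, eta = 400, 2.0, -0.8        # strong coupling, partially anti-symmetric
dt, D, T = 0.05, 2.0, 200.0       # integration step, interaction delay, duration
d_steps = int(D / dt)

X = rng.standard_normal((N, N))
J = g / np.sqrt(N) * (X + eta * X.T) / np.sqrt(1 + eta**2)

buf = 0.1 * rng.standard_normal((d_steps + 1, N))   # history over the delay
hist = []
for _ in range(int(T / dt)):
    x, x_delayed = buf[-1], buf[0]
    x_new = x + dt * (-x + J @ np.tanh(x_delayed))  # delayed interactions
    buf = np.vstack([buf[1:], x_new])
    hist.append(x_new)

hist = np.array(hist)[2000:]      # discard the transient
print("single-unit variance:        %.3f" % hist.var(0).mean())
print("population-average variance: %.4f" % hist.mean(1).var())
```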
The computational role of structure in neural activity and connectivity
One major challenge of neuroscience is finding interesting structures in
seemingly disorganized neural activity. Often these structures have
computational implications that help to understand the functional role of a
particular brain area. Here we outline a unified approach to characterize these
structures by inspecting the representational geometry and the modularity
properties of the recorded activity, and show that this approach can also
reveal structures in connectivity. We start by setting up a general framework
for determining geometry and modularity in activity and connectivity and
relating these properties with computations performed by the network. We then
use this framework to review the types of structure found in recent works on
model networks performing three classes of computations.
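As a generic illustration of the two properties this framework inspects (not the paper's own pipeline), the sketch below builds synthetic trial-averaged activity with two hypothetical functional clusters, then quantifies geometry via the spectrum of the condition-by-condition Gram matrix and modularity via the similarity of neurons' normalized selectivity profiles. All data and cluster structure are fabricated for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
n_neurons, n_conditions = 200, 8

# synthetic activity: two functional clusters of neurons, each with its own
# condition tuning, plus noise (purely illustrative data)
tuning = rng.standard_normal((2, n_conditions))
labels = rng.integers(0, 2, n_neurons)
R = tuning[labels] + 0.3 * rng.standard_normal((n_neurons, n_conditions))

# geometry: spectrum of the condition-by-condition Gram matrix
centered = R - R.mean(1, keepdims=True)
G = centered.T @ centered / n_neurons
eigvals = np.linalg.eigvalsh(G)[::-1]
print("variance in top 2 dimensions: %.2f" % (eigvals[:2].sum() / eigvals.sum()))

# modularity: cosine similarity between neurons' selectivity profiles
profiles = centered / np.linalg.norm(centered, axis=1, keepdims=True)
sim = profiles @ profiles.T
same = labels[:, None] == labels[None, :]
print("within- vs between-cluster similarity: %.2f vs %.2f"
      % (sim[same].mean(), sim[~same].mean()))
```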
Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity
The current exponential rise in recording capacity calls for new approaches
for analysing and interpreting neural data. Effective dimensionality has
emerged as a key concept for describing neural activity at the collective
level, yet different studies rely on a variety of definitions of it. Here we
focus on the complementary notions of intrinsic and embedding dimensionality,
and argue that they provide a useful framework for extracting computational
principles from data. Reviewing recent works, we propose that the intrinsic
dimensionality reflects information about the latent variables encoded in
collective activity, while embedding dimensionality reveals the manner in which
this information is processed. Network models form an ideal substrate for
more specifically testing hypotheses about the computational principles
reflected in intrinsic and embedding dimensionality.
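The two notions can be made concrete on synthetic data. Below, a one-dimensional latent variable (an angle) is embedded nonlinearly into a 100-dimensional space; the participation ratio of the PCA spectrum measures the embedding dimensionality (about 3 here, one per nonlinear component), while the Two-NN estimator of Facco et al. (2017) recovers the intrinsic dimensionality of about 1. The data and embedding are fabricated for the example.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(6)
N, T = 100, 2000

# 1D latent variable embedded nonlinearly into N-dimensional activity space
theta = rng.uniform(0, 2 * np.pi, T)
basis = rng.standard_normal((N, 3))
X = np.column_stack([np.cos(theta), np.sin(theta), np.cos(2 * theta)]) @ basis.T
X += 1e-3 * rng.standard_normal((T, N))   # small observation noise

# embedding dimensionality: participation ratio of the covariance spectrum
lam = np.linalg.eigvalsh(np.cov(X.T))
print("participation ratio (embedding): %.2f" % (lam.sum() ** 2 / (lam ** 2).sum()))

# intrinsic dimensionality: Two-NN estimator (Facco et al. 2017)
D = cdist(X, X)
np.fill_diagonal(D, np.inf)
order = np.argsort(D, axis=1)
r1 = D[np.arange(T), order[:, 0]]         # distance to nearest neighbor
r2 = D[np.arange(T), order[:, 1]]         # distance to second-nearest neighbor
print("Two-NN estimate (intrinsic): %.2f" % (T / np.log(r2 / r1).sum()))
```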
A Complex-Valued Firing-Rate Model That Approximates the Dynamics of Spiking Networks
Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons.
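A schematic of the core idea, not the paper's derived equations: a traditional rate unit relaxes to a new rate monotonically with a single time constant, whereas a single complex-valued unit with eigenvalue lambda = -alpha + i*omega relaxes through damped oscillations, the kind of transient that partial spike synchronization produces in a population. All constants below are illustrative.

```python
import numpy as np

dt, T = 0.001, 0.5
r_inf = 20.0                   # steady-state rate after an input step (assumed)
tau = 0.02                     # traditional time constant, 20 ms
lam = -50.0 + 300.0j           # illustrative damping and ringing frequency

r = 0.0                        # traditional rate unit
z = 0.0 + 0.0j                 # complex-valued unit; z relaxes from 0 to 1
trad, cplx = [], []
for _ in range(int(T / dt)):
    r += dt * (r_inf - r) / tau
    z += dt * lam * (z - 1.0)  # solution: z(t) = 1 - exp(lam * t)
    trad.append(r)
    cplx.append(r_inf * z.real)

# the complex unit overshoots and rings; the traditional unit cannot
print("traditional overshoot:    %.1f" % (max(trad) - r_inf))
print("complex-valued overshoot: %.1f" % (max(cplx) - r_inf))
```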
Aligned and oblique dynamics in recurrent neural networks
The relation between neural activity and behaviorally relevant variables is
at the heart of neuroscience research. When strong, this relation is termed a
neural representation. There is increasing evidence, however, for partial
dissociations between activity in an area and relevant external variables.
While many explanations have been proposed, a theoretical framework for the
relationship between external and internal variables is lacking. Here, we
utilize recurrent neural networks (RNNs) to explore the question of when and
how neural dynamics and the network's output are related from a geometrical
point of view. We find that RNNs can operate in two regimes: dynamics can
either be aligned with the directions that generate output variables, or
oblique to them. We show that the magnitude of the readout weights can serve as
a control knob between the regimes. Importantly, these regimes are functionally
distinct. Oblique networks are more heterogeneous and suppress noise in their
output directions. They are furthermore more robust to perturbations along the
output directions. Finally, we show that the two regimes can be dissociated in
neural recordings. Altogether, our results open a new perspective for
interpreting neural activity by relating network dynamics and their output.
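One way to make the aligned/oblique distinction operational (a simple diagnostic, not the paper's full analysis) is to compare the activity variance along the readout direction with the variance along the top principal component: near 1 for aligned dynamics, near 0 for oblique ones. The synthetic activity and readout vectors below are fabricated for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T = 100, 1000

# synthetic activity with one dominant collective direction
top = rng.standard_normal(N)
top /= np.linalg.norm(top)
latent = np.sin(np.linspace(0, 20, T))
X = 5.0 * np.outer(latent, top) + 0.5 * rng.standard_normal((T, N))

def alignment(w, X):
    """Variance along the readout direction, relative to the top PC variance."""
    Xc = X - X.mean(0)
    var_w = np.var(Xc @ w / np.linalg.norm(w))
    lam_max = np.linalg.eigvalsh(np.cov(Xc.T)).max()
    return var_w / lam_max

w_aligned = top + 0.1 * rng.standard_normal(N)   # readout along dominant direction
w_oblique = rng.standard_normal(N)               # readout oblique to it

print("aligned readout: %.2f" % alignment(w_aligned, X))
print("oblique readout: %.2f" % alignment(w_oblique, X))
```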